Subgaussian Tail Bounds via Stability Arguments

Authors

  • Thomas Steinke
  • Jonathan Ullman
Abstract

Sums of independent, bounded random variables concentrate around their expectation approximately as well as a Gaussian of the same variance. Well-known results of this form include the Bernstein, Hoeffding, and Chernoff inequalities, among many others. We present an alternative proof of these tail bounds based on what we call a stability argument, which avoids bounding the moment generating function or higher-order moments of the distribution. Our stability argument is inspired by recent work on the generalization properties of differential privacy and their connection to adaptive data analysis (Bassily et al., STOC 2016).
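The concentration phenomenon the abstract describes is easy to check numerically. The following Python sketch (an illustration, not part of the paper's stability argument) simulates the mean of n i.i.d. Bernoulli(p) variables and compares its empirical tail probability with the two-sided Hoeffding bound 2·exp(−2nt²) for [0,1]-valued variables; the function names and parameters are chosen here for illustration.

```python
import math
import random

def empirical_tail(n=200, t=0.1, p=0.5, trials=20000, seed=0):
    """Estimate P(|sample mean - p| >= t) for n i.i.d. Bernoulli(p) draws."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.random() < p for _ in range(n))  # Binomial(n, p) draw
        if abs(s / n - p) >= t:
            hits += 1
    return hits / trials

def hoeffding_bound(n, t):
    """Two-sided Hoeffding bound for means of [0,1]-valued variables."""
    return 2 * math.exp(-2 * n * t * t)

emp = empirical_tail()          # empirical deviation probability
bnd = hoeffding_bound(200, 0.1) # subgaussian upper bound, about 0.0366
# The empirical tail should sit (well) below the Hoeffding bound.
```

In this regime the true tail is several times smaller than the bound, which is the usual picture: Hoeffding-type inequalities give the correct subgaussian shape exp(−Θ(nt²)) up to constants, and the paper's contribution is an alternative route to bounds of this form.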


Similar articles

Some Probability Inequalities for Quadratic Forms of Negatively Dependent Subgaussian Random Variables

In this paper, we obtain the upper exponential bounds for the tail probabilities of the quadratic forms for negatively dependent subgaussian random variables. In particular the law of iterated logarithm for quadratic forms of independent subgaussian random variables is generalized to the case of negatively dependent subgaussian random variables.


Local Tail Bounds for Functions of Independent Random Variables

It is shown that functions defined on {0, 1, . . . , r − 1}n satisfying certain conditions of bounded differences that guarantee subgaussian tail behavior also satisfy a much stronger “local” subgaussian property. For self-bounding and configuration functions we derive analogous locally subexponential behavior. The key tool is Talagrand’s (1994) variance inequality for functions defined on the ...


Exponential Bounds and Tails for Additive Random Recursive Sequences

Exponential bounds and tail estimates are derived for additive random recursive sequences, which typically arise as functionals of recursive structures, of random trees or in recursive algorithms. In particular they arise as parameters of divide and conquer type algorithms. We derive tail bounds from estimates of the Laplace transforms and of the moment sequences. For the proof we use some clas...


Sharp Nonasymptotic Bounds on the Norm of Random Matrices with Independent Entries by Afonso

This bound is optimal in the sense that a matching lower bound holds under mild assumptions, and the constants are sufficiently sharp that we can often capture the precise edge of the spectrum. Analogous results are obtained for rectangular matrices and for more general subgaussian or heavy-tailed distributions of the entries, and we derive tail bounds in addition to bounds on the expected norm...


Learning sub-Gaussian classes : Upper and minimax bounds

Most of the results contained in this note have been presented at the SMF meeting, which took place in May 2011; the rest have been obtained shortly after the time of the meeting. The question we study has to do with the optimality of Empirical Risk Minimization as a learning procedure in a convex class – when the problem is subgaussian. Subgaussian learning problems are a natural object because t...



Journal:
  • CoRR

Volume abs/1701.03493  Issue 

Pages  -

Publication date 2017